Variational autoencoders employ an amortized inference model to approximate the posterior of latent variables. However, such amortized variational inference faces two challenges: (1) the limited posterior expressiveness of the fully-factorized Gaussian assumption and (2) the amortization error of the inference model. We present a novel approach that addresses both challenges. First, we focus on ReLU networks with a Gaussian output and illustrate their connection to probabilistic PCA. Building on this observation, we derive an iterative algorithm that finds the mode of the posterior and apply a full-covariance Gaussian posterior approximation centered on the mode. Subsequently, we present a general framework named Variational Laplace Autoencoders (VLAEs) for training deep generative models. Based on the Laplace approximation of the latent variable posterior, VLAEs enhance the expressiveness of the posterior while reducing the amortization error. Empirical results on MNIST, Omniglot, Fashion-MNIST, SVHN and CIFAR10 show that the proposed approach significantly outperforms other recent amortized or iterative methods on ReLU networks.
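Because a ReLU decoder is piecewise linear, the generative model is locally a probabilistic PCA, whose exact Gaussian posterior is available in closed form; iterating that closed-form update yields the posterior mode, and the final precision matrix supplies the full-covariance Laplace posterior. The numpy sketch below illustrates this on a toy one-hidden-layer decoder; the dimensions, initialization, and iteration count are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# Toy decoder: x ~ N(f(z), sigma2 * I), f(z) = W2 @ relu(W1 @ z + b1) + b2,
# with a standard normal prior over z. All sizes here are illustrative.
rng = np.random.default_rng(0)
d_z, d_h, d_x, sigma2 = 2, 8, 4, 0.1
W1, b1 = rng.normal(size=(d_h, d_z)), rng.normal(size=d_h)
W2, b2 = rng.normal(size=(d_x, d_h)), rng.normal(size=d_x)
x = rng.normal(size=d_x)                   # observation

z = np.zeros(d_z)                          # in VLAE an encoder supplies this init
for _ in range(10):
    pre = W1 @ z + b1
    mask = (pre > 0).astype(float)         # active ReLU pattern at the current z
    A = W2 @ (mask[:, None] * W1)          # local Jacobian: f(z') ~= A z' + c nearby
    c = W2 @ (mask * pre) + b2 - A @ z     # offset of the local linearization
    # Exact pPCA-style Gaussian posterior for the locally linear model:
    Lam = A.T @ A / sigma2 + np.eye(d_z)   # posterior precision (Laplace Hessian)
    z = np.linalg.solve(Lam, A.T @ (x - c) / sigma2)  # updated mode estimate

Sigma = np.linalg.inv(Lam)                 # full-covariance Laplace posterior
print("posterior mode:", z, "\ncovariance:\n", Sigma)
```

At a fixed point the update leaves z unchanged, so z is a mode of the true posterior and Sigma is the corresponding Laplace covariance.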
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
We develop BenchPress, the first ML benchmark generator for compilers that is steerable within feature-space representations of source code. BenchPress synthesizes compiling functions by adding new code in any part of an empty or existing sequence, jointly observing its left and right context, and achieves an excellent compilation rate. BenchPress steers benchmark generation toward desired target features that have been impossible for state-of-the-art synthesizers (or indeed humans) to reach, outperforming (a) CLgen, a state-of-the-art ML synthesizer, (b) the CLSmith fuzzer, (c) the SRCIROR mutator, and (d) human-written code from GitHub. BenchPress is the first generator to search the feature space with active learning in order to generate benchmarks that improve a downstream task. We show how, using BenchPress, the Grewe et al. CPU-vs-GPU heuristic model obtains a higher speedup when trained on BenchPress's benchmarks compared to other techniques. BenchPress is a powerful code generator: its generated samples compile at a rate of 86%, compared to CLgen's 2.33%. Starting from an empty fixed input, BenchPress produces 10x more unique, compiling OpenCL benchmarks than CLgen, and they are significantly larger and more feature-diverse.
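As a rough illustration of feature-space steering, the sketch below hill-climbs toward a target feature vector by repeatedly infilling code and keeping the compiling candidate closest to the target. Here `infill_model` (a bidirectional masked LM whose `fill_hole` returns objects with `.text` and `.compiles` fields) and `extract_features` (returning a numpy feature vector) are hypothetical stand-ins, not BenchPress's actual API.

```python
import numpy as np

def steer_generation(infill_model, extract_features, target, seed_code="",
                     steps=32, candidates=16):
    """Greedily edit code toward a target compiler-feature vector."""
    best_code, best_dist = seed_code, np.inf
    code = seed_code
    for _ in range(steps):
        # Propose edits by infilling a hole, conditioning on both left and
        # right context (unlike left-to-right language models).
        proposals = [infill_model.fill_hole(code) for _ in range(candidates)]
        compiling = [p for p in proposals if p.compiles]   # keep compilable code
        if not compiling:
            continue
        # Score candidates by distance to the target features and continue
        # from the closest one: a simple hill climb over feature space.
        dists = [np.linalg.norm(extract_features(p.text) - target)
                 for p in compiling]
        i = int(np.argmin(dists))
        code = compiling[i].text
        if dists[i] < best_dist:
            best_code, best_dist = code, dists[i]
    return best_code
```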
Language models demonstrate both quantitative improvements and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to billions of parameters. In addition, a team of human expert raters performed all tasks to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably often involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
In light of the recent analysis of the privacy implications of scene revelation from visual descriptors, we develop descriptors that conceal the content of the input image. In particular, we propose an adversarial learning framework for training visual descriptors that prevent image reconstruction while maintaining matching accuracy. We let a feature-encoding network and an image-reconstruction network compete with each other, such that the feature encoder tries to impede image reconstruction with its generated descriptors, while the reconstructor tries to recover the input image from the descriptors. Experimental results demonstrate that the visual descriptors obtained with our method significantly degrade image reconstruction quality with minimal impact on correspondence matching and camera localization performance.
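A minimal PyTorch sketch of this two-player game, assuming an `encoder` mapping images to descriptors, a `reconstructor` mapping descriptors back to images, and some matching loss over correspondence labels; all names and the weighting are illustrative, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def train_step(encoder, reconstructor, match_loss, imgs, pairs, lam,
               opt_enc, opt_rec):
    # 1) Reconstructor step: try to recover the image from frozen descriptors.
    desc = encoder(imgs).detach()
    loss_rec = F.mse_loss(reconstructor(desc), imgs)
    opt_rec.zero_grad()
    loss_rec.backward()
    opt_rec.step()

    # 2) Encoder step: keep descriptors matchable while *maximizing* the
    #    reconstruction error (opt_enc must hold only encoder parameters).
    desc = encoder(imgs)
    loss_enc = match_loss(desc, pairs) - lam * F.mse_loss(reconstructor(desc), imgs)
    opt_enc.zero_grad()
    loss_enc.backward()
    opt_enc.step()
    return loss_rec.item(), loss_enc.item()
```

Alternating these two updates is the standard way to train such a minimax objective; the trade-off weight `lam` controls how aggressively the encoder sacrifices reconstructability.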
Schemata are structured representations of complex tasks that can aid artificial intelligence by allowing models to break complex tasks into intermediate steps. We propose a novel system that induces schemata from web videos and generalizes them to capture unseen tasks, with the goal of improving video retrieval performance. Our system proceeds in three major phases: (1) given a task and its related videos, we construct an initial schema for the task using a joint video-text model to match video segments with step texts from WikiHow; (2) we generalize the schemata to unseen tasks by leveraging language models to edit the text in existing schemata. Through generalization, our schemata can cover a much wider range of tasks with only a small amount of learning data; (3) we perform zero-shot instructional video retrieval using unseen task names as queries. Our schema-guided approach outperforms existing video retrieval methods, and we demonstrate that the schemata induced by our system are better than those generated by other models.
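A toy, self-contained sketch of the three phases, using bag-of-words overlap as a stand-in for the joint video-text model and a string substitution as a stand-in for the LM editing step; every helper here is invented purely for illustration.

```python
from collections import Counter

def embed(text):                      # stand-in for a learned text/video encoder
    return Counter(text.lower().split())

def sim(a, b):                        # simple token-overlap similarity
    return sum((a & b).values()) / max(1, sum((a | b).values()))

def induce_schema(segment_captions, wikihow_steps):
    """Phase 1: match each video segment to its closest WikiHow step."""
    return [max(wikihow_steps, key=lambda s: sim(embed(c), embed(s)))
            for c in segment_captions]

def generalize(schema, source_task, target_task):
    """Phase 2: edit step texts toward an unseen task (an LM would do this)."""
    return [s.replace(source_task, target_task) for s in schema]

def retrieve(query_task, schema, video_captions):
    """Phase 3: zero-shot retrieval, ranking videos against the schema text."""
    q = embed(" ".join(schema + [query_task]))
    return sorted(video_captions, key=lambda v: -sim(q, embed(v)))

schema = induce_schema(["crack the egg", "whisk eggs in bowl"],
                       ["crack the egg", "whisk the eggs", "serve the dish"])
unseen = generalize(schema, source_task="egg", target_task="pancake")
print(retrieve("make a pancake", unseen,
               ["whisk the pancake batter", "replace a car tire"]))
```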
Deep learning recommendation models (DLRM) are widespread, account for a considerable data-center footprint, and grow by more than 1.5x per year. With model sizes soon to be in the terabytes range, leveraging storage-class memory (SCM) for inference enables lower power consumption and cost. This paper evaluates the major challenges in extending the memory hierarchy to SCM for DLRM and presents different techniques to improve performance through software-defined memory. We show how underlying technologies such as NAND Flash and 3DXP differentiate, relating them to real-world scenarios and enabling power savings of 5% to 29%.
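As a rough illustration of the idea, the sketch below keeps hot embedding rows in a DRAM-resident LRU cache and serves cold rows from a slower tier; in a real deployment the cold table would live on SCM (e.g., memory-mapped NAND Flash or 3DXP), and all sizing here is made up.

```python
from collections import OrderedDict
import numpy as np

class TieredEmbedding:
    def __init__(self, table, dram_rows):
        self.cold = table                 # would be SCM-resident in practice
        self.hot = OrderedDict()          # DRAM cache with LRU eviction
        self.capacity = dram_rows

    def lookup(self, idx):
        if idx in self.hot:
            self.hot.move_to_end(idx)     # mark row as recently used
            return self.hot[idx]
        vec = self.cold[idx].copy()       # slow-tier read (the costly path)
        self.hot[idx] = vec
        if len(self.hot) > self.capacity:
            self.hot.popitem(last=False)  # evict the least recently used row
        return vec

emb = TieredEmbedding(np.random.rand(10_000, 64).astype(np.float32),
                      dram_rows=256)
vecs = [emb.lookup(i) for i in (3, 7, 3, 42)]   # second lookup of 3 hits DRAM
```

The skewed access distribution of recommendation embeddings is what makes such a small DRAM cache effective in front of a much larger slow tier.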
Quantum computers promise to enhance machine learning for practical applications. Quantum machine learning for real-world data has to handle extensive amounts of high-dimensional data. However, conventional methods for measuring quantum kernels are impractical for large datasets as they scale with the square of the dataset size. Here, we measure quantum kernels using randomized measurements. The quantum computation time scales linearly with dataset size, and the classical post-processing scales quadratically. While our method scales exponentially in the number of qubits in general, we gain a substantial speed-up when running on intermediate-sized quantum computers. Further, we efficiently encode high-dimensional data into quantum computers, with the number of features scaling linearly in the circuit depth. The encoding is characterized by the quantum Fisher information metric and is related to the radial basis function kernel. Our approach is robust to noise via a cost-free error mitigation scheme. We demonstrate the advantages of our methods for noisy quantum computers by classifying images with the IBM quantum computer. To achieve further speedups we distribute the quantum computational tasks between different quantum computers. Our method enables benchmarking of quantum machine learning algorithms with large datasets on currently available quantum computers.
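One way such a randomized-measurement kernel estimator can work is sketched below, using the known formula that recovers a state overlap Tr[ρσ] from bitstring statistics collected under shared random single-qubit unitaries. Exact statevector simulation stands in for hardware here; the two-qubit states, unitary count, and other details are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
n = 2                                        # number of qubits

def haar_single_qubit():
    """Haar-random 2x2 unitary via phase-corrected QR decomposition."""
    z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

def probs(state, u_list):
    """Measurement distribution after applying random local unitaries."""
    for k, u in enumerate(u_list):
        op = np.kron(np.kron(np.eye(2**k), u), np.eye(2**(n - k - 1)))
        state = op @ state
    return np.abs(state) ** 2

def kernel_estimate(psi, phi, n_unitaries=200):
    est = 0.0
    for _ in range(n_unitaries):
        us = [haar_single_qubit() for _ in range(n)]
        p, q = probs(psi, us), probs(phi, us)
        for (s, ps), (t, qt) in product(enumerate(p), enumerate(q)):
            ham = bin(s ^ t).count("1")      # Hamming distance of outcomes
            est += 2**n * (-2.0) ** (-ham) * ps * qt
    return est / n_unitaries

psi = np.array([1, 0, 0, 0], dtype=complex)                # |00>
phi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # Bell state
print(kernel_estimate(psi, phi), "should be close to",
      abs(psi.conj() @ phi) ** 2)
```

Because the same random unitaries are reused for every data point, the per-point measurement cost is what grows linearly with dataset size, while the pairwise kernel entries are assembled in classical post-processing.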
State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.
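The pre-training task lends itself to a compact sketch: a symmetric contrastive loss over a batch of (image, text) pairs, paraphrasing the pseudocode in the CLIP paper. The fixed temperature and embedding sizes below are placeholder choices (CLIP learns its temperature).

```python
import torch
import torch.nn.functional as F

def clip_loss(image_emb, text_emb, temperature=0.07):
    # L2-normalize embeddings so dot products are cosine similarities.
    img = F.normalize(image_emb, dim=-1)
    txt = F.normalize(text_emb, dim=-1)
    logits = img @ txt.t() / temperature      # [batch, batch] similarity matrix
    labels = torch.arange(len(img))           # matching pairs lie on the diagonal
    # Predict which caption goes with which image, and vice versa.
    return (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.t(), labels)) / 2

# Example with random stand-ins for encoder outputs:
loss = clip_loss(torch.randn(8, 512), torch.randn(8, 512))
```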
Automatic music generation with artificial intelligence typically requires a large amount of data, which is hard to obtain for many less common genres and musical instruments. To tackle this issue, we present ongoing work and preliminary findings on the possibility for deep models to transfer knowledge from language to music, by finetuning large language models pre-trained on a massive text corpus on only hundreds of MIDI files of drum performances. We show that by doing so, one of the largest state-of-the-art models (GPT3) is capable of generating reasonable drum grooves, while models that are not pre-trained (Transformer) show no such ability beyond naive repetition. Evaluating generated music is a challenging task; evaluating drum grooves, for which there is little precedent in the literature, is even more so. Hence, we propose a tailored structural evaluation method and analyze drum grooves produced by GPT3 compared to those played by human professionals, exposing the strengths and weaknesses of such generation by language-to-music transfer. Our findings suggest that language-to-music transfer learning with large language models is viable and promising.
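A toy sketch of the setup's key move: serializing drum hits into text tokens so a text-pre-trained LM can be fine-tuned on them. The token scheme below is invented for illustration and is not the paper's encoding.

```python
def drums_to_text(hits):
    """hits: list of (time_step, instrument) pairs, e.g. (0, 'kick')."""
    return " ".join(f"t{t}:{inst}" for t, inst in sorted(hits))

groove = [(0, "kick"), (0, "hihat"), (2, "snare"), (2, "hihat")]
print(drums_to_text(groove))   # -> t0:hihat t0:kick t2:hihat t2:snare
```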